Residual-as-Teacher: Mitigating Bias Propagation in Student--Teacher Estimation
Kakei Yamamoto and Martin J. Wainwright
We study statistical estimation in a student--teacher setting, where predictions from a pre-trained teacher are used to guide a student model. A standard approach is to train the student to directly match the teacher's outputs, which we refer to as student soft matching (SM). This approach directly propagates any systematic bias or mis-specification present in the teacher, thereby degrading the student's predictions. We propose and analyze an alternative scheme, which we call residual-as-teacher (RaT), in which the teacher is used to estimate residuals in the student's predictions. Our analysis shows how the student can thereby emulate a proximal gradient scheme for solving an oracle optimization problem, and this provably reduces the effect of teacher bias. For general student--teacher pairs, we establish non-asymptotic excess risk bounds for any RaT fixed point, along with convergence guarantees for the student--teacher iterative scheme. For kernel-based student--teacher pairs, we prove a sharp separation: the RaT method achieves the minimax-optimal rate, while the SM method incurs constant prediction error for any sample size. Experiments on both synthetic data and ImageNette classification under covariate shift corroborate our theoretical findings.
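The abstract's claim that soft matching propagates teacher bias can be illustrated with a minimal synthetic sketch. The setup below is entirely hypothetical (the function `features`, the constant teacher offset of 0.5, and the ridge regularization are illustrative choices, not the paper's construction); it does not implement RaT itself, only the SM bias-propagation phenomenon the abstract describes, by comparing an SM-trained student against a student fit directly on noisy labels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression problem.
n = 200
x = rng.uniform(-1.0, 1.0, size=(n, 1))
f_true = np.sin(3.0 * x).ravel()
y = f_true + 0.1 * rng.standard_normal(n)  # noisy labels

# Hypothetical biased teacher: the true function plus a systematic offset.
teacher = f_true + 0.5

# Degree-5 polynomial features for a simple linear student (illustrative).
def features(x):
    return np.hstack([x ** k for k in range(6)])

def ridge_fit(Phi, target, lam=1e-3):
    """Closed-form ridge regression weights."""
    d = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(d), Phi.T @ target)

Phi = features(x)

# Soft matching (SM): student regresses directly on the teacher's outputs,
# so the teacher's systematic offset is absorbed into the student.
w_sm = ridge_fit(Phi, teacher)

# Baseline: student fit on the noisy labels themselves.
w_label = ridge_fit(Phi, y)

mse_sm = np.mean((Phi @ w_sm - f_true) ** 2)      # inherits the 0.5 bias
mse_label = np.mean((Phi @ w_label - f_true) ** 2)
print(f"SM student MSE:  {mse_sm:.4f}")
print(f"Label-fit MSE:   {mse_label:.4f}")
```

In this toy setting the SM student's error is dominated by the teacher's squared bias (about 0.25), while the label-fit student's error is small, matching the abstract's constant-error versus vanishing-error separation in spirit.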